The meaningful use of electronic health records (EHR) continues to progress in the digital era with clinical decision support systems augmented by artificial intelligence. A priority in improving the provider experience is to overcome information overload and reduce cognitive burden so that fewer medical errors and cognitive biases are introduced during patient care. One major type of medical error is diagnostic error, which arises from systematic or predictable errors in judgment that rely on heuristics. The potential for clinical natural language processing (cNLP) to model human diagnostic reasoning, with forward reasoning from data to diagnosis, and thereby reduce cognitive burden and medical error, has not been investigated. Existing tasks to advance the science in cNLP have largely focused on information extraction and named entity recognition through classification tasks. We introduce a novel suite of tasks, the Diagnostic Reasoning Benchmark (DR.BENCH), as a new benchmark for developing and evaluating cNLP models with clinical diagnostic reasoning ability. The suite includes six tasks drawn from ten publicly available datasets, addressing clinical text understanding, medical knowledge reasoning, and diagnosis generation. DR.BENCH is the first clinical suite of tasks designed as a natural language generation framework for evaluating pre-trained language models. Experiments with state-of-the-art pre-trained generative language models, both large general-domain models and models continually trained on a medical corpus, demonstrate opportunities for improvement when evaluated on DR.BENCH. We share DR.BENCH as a publicly available GitLab repository with a systematic approach to load and evaluate models for the cNLP community.
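To illustrate the kind of generative evaluation DR.BENCH targets, here is a minimal sketch using the Hugging Face transformers library to produce a candidate diagnosis from a snippet of clinical text. The prompt template, model choice, and exact-match scoring are assumptions for illustration only, not the benchmark's actual loader or metrics.

```python
# Minimal sketch of generative evaluation in the spirit of DR.BENCH.
# The prompt template, model choice, and exact-match scoring below are
# illustrative assumptions, not the benchmark's actual interface.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_name = "t5-base"  # stand-in; a clinically pre-trained model could be swapped in
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

def generate_diagnosis(progress_note: str) -> str:
    """Generate a free-text diagnosis from a clinical note (diagnosis generation task)."""
    prompt = f"summarize the most likely diagnosis: {progress_note}"
    inputs = tokenizer(prompt, return_tensors="pt", truncation=True, max_length=512)
    output_ids = model.generate(**inputs, max_new_tokens=32)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

# Toy evaluation loop: compare generated diagnoses against reference labels.
examples = [("65M with crushing substernal chest pain and ST elevation.", "acute myocardial infarction")]
correct = sum(generate_diagnosis(note).lower() == gold for note, gold in examples)
print(f"exact-match accuracy: {correct / len(examples):.2f}")
```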
The U.S. criminal legal system increasingly relies on software outputs to convict and incarcerate people. In a large number of cases each year, the government makes these consequential decisions based on evidence from statistical software, such as probabilistic genotyping, environmental audio detection, and toolmark analysis tools, whose output defense counsel cannot fully cross-examine or scrutinize. This undermines the commitments of the adversarial criminal legal system, which relies on the defense's ability to probe and test the prosecution's case in order to safeguard individual rights. In response to this need for adversarial scrutiny of the output of such software, we propose robust adversarial testing as an audit framework for examining the validity of evidentiary statistical software. We define and operationalize this notion of robust adversarial testing by drawing on a large body of recent work in robust machine learning and algorithmic fairness. We demonstrate how this framework standardizes the process of scrutinizing such tools and enables defense counsel to examine their validity under the conditions most relevant to the case at hand. We further discuss existing structural and institutional challenges within the U.S. criminal legal system that may create barriers to implementing this and other such audit frameworks, and conclude with a discussion of policy changes that could help address these concerns.
Language models demonstrate both quantitative improvements and new qualitative capabilities with increasing scale. Despite their potentially transformative impact, these new capabilities are as yet poorly characterized. In order to inform future research, prepare for disruptive new model capabilities, and ameliorate socially harmful effects, it is vital that we understand the present and near-future capabilities and limitations of language models. To address this challenge, we introduce the Beyond the Imitation Game benchmark (BIG-bench). BIG-bench currently consists of 204 tasks, contributed by 442 authors across 132 institutions. Task topics are diverse, drawing from linguistics, childhood development, math, common-sense reasoning, biology, physics, social bias, software development, and beyond. BIG-bench focuses on tasks that are believed to be beyond the capabilities of current language models. We evaluate the behavior of OpenAI's GPT models, Google-internal dense transformer architectures, and Switch-style sparse transformers on BIG-bench, across model sizes spanning millions to hundreds of billions of parameters. In addition, a team of human expert raters performed all tasks in order to provide a strong baseline. Findings include: model performance and calibration both improve with scale, but are poor in absolute terms (and when compared with rater performance); performance is remarkably similar across model classes, though with benefits from sparsity; tasks that improve gradually and predictably commonly involve a large knowledge or memorization component, whereas tasks that exhibit "breakthrough" behavior at a critical scale often involve multiple steps or components, or brittle metrics; and social bias typically increases with scale in settings with ambiguous context, but this can be improved with prompting.
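Since the findings emphasize calibration alongside raw accuracy, the following sketch shows one standard way a calibration measure can be computed: expected calibration error over binned confidences. The data are synthetic and the function is a generic illustration, not BIG-bench's evaluation code.

```python
# Expected calibration error (ECE): bin predictions by confidence and compare
# average confidence with empirical accuracy in each bin. Synthetic numbers
# below are for illustration only, not BIG-bench results.
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap  # weight by the fraction of samples in the bin
    return ece

rng = np.random.default_rng(0)
conf = rng.uniform(0.5, 1.0, size=1000)
hits = rng.uniform(size=1000) < conf * 0.9  # a slightly over-confident toy model
print(f"ECE: {expected_calibration_error(conf, hits):.3f}")
```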
Human space exploration beyond Earth orbit will involve missions of significant distance and duration. To effectively mitigate myriad space health hazards, paradigm shifts in data and space health systems are necessary to enable Earth independence rather than Earth reliance. Promising developments in the fields of artificial intelligence and machine learning for biology and health can address these needs. We propose an appropriately autonomous and intelligent Precision Space Health system that will monitor, aggregate, and assess biomedical statuses; analyze and predict personalized adverse health outcomes; adapt and respond to newly accumulated data; and provide preventive, actionable, and timely insights to individual deep space crew members, as well as iterative decision support to their crew medical officer. Here we present a summary of recommendations from a workshop organized by the National Aeronautics and Space Administration on future applications of artificial intelligence in space biology and health. In the next decade, biomonitoring technology, biomarker science, spacecraft hardware, intelligent software, and streamlined data management must mature and be woven together into a Precision Space Health system to enable humankind to thrive in deep space.
Space biology research aims to understand the fundamental effects of spaceflight on organisms, develop foundational knowledge to support deep space exploration, and ultimately bioengineer spacecraft and habitats to stabilize ecosystems of plants, crops, microbes, animals, and humans for sustained multi-planetary life. To advance these aims, the field leverages experiments, platforms, data, and model organisms from both spaceborne and ground-analog studies. As research extends beyond low Earth orbit, experiments and platforms must be maximally autonomous, light, agile, and intelligent to expedite knowledge discovery. Here we present a summary of recommendations from a workshop organized by the National Aeronautics and Space Administration on artificial intelligence, machine learning, and modeling applications that offer key solutions to these space biology challenges. In the next decade, integrating artificial intelligence into the field of space biology will deepen the biological understanding of spaceflight effects, facilitate predictive modeling and analytics, support maximally autonomous and reproducible experiments, and efficiently manage spaceborne data and metadata, all with the goal of enabling life to thrive in deep space.
While we note recent advances in clinical natural language processing (NLP), there remains some resistance in the clinical and translational research community to adopting NLP models, owing to their limited transparency, interpretability, and usability. In this study, we propose an open natural language processing development framework. We evaluated it through the implementation of NLP algorithms for the National COVID Cohort Collaborative (N3C). Motivated by the interest in information extraction from COVID-19-related clinical notes, our work covers 1) an open data annotation process using COVID-19 signs and symptoms as the use case, 2) a community-driven ruleset composition platform, and 3) a synthetic text data generation workflow to produce texts for information extraction tasks without involving human subjects. The corpora were derived from texts from three different institutions (Mayo Clinic, University of Kentucky, University of Minnesota). Gold-standard annotation was performed with a single-institution (Mayo) ruleset, resulting in F-scores of 0.876, 0.706, and 0.694 on the Mayo, Minnesota, and Kentucky test datasets, respectively. As a consortium effort of the N3C NLP subgroup, this study demonstrates the feasibility of creating a federated NLP algorithm development and benchmarking platform to enhance multi-institutional clinical NLP research and adoption. Although we use COVID-19 as the use case in this work, our framework is general enough to be applied to other domains of interest in clinical NLP.
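To make the ruleset idea concrete, here is a minimal, hypothetical sketch of dictionary- and regex-based extraction of COVID-19 signs and symptoms from a note, including crude negation handling. The lexicon and rules are illustrative assumptions only and are not the N3C rulesets or the framework's actual implementation.

```python
# Hypothetical, minimal rule-based extractor for COVID-19 signs and symptoms.
# The lexicon and negation handling below are illustrative; the actual rulesets
# are developed and shared through the community-driven platform.
import re

SYMPTOM_LEXICON = {
    "fever": r"\bfever(ish)?\b",
    "cough": r"\bcough(ing)?\b",
    "shortness of breath": r"\b(shortness of breath|dyspnea|sob)\b",
    "loss of taste/smell": r"\b(anosmia|ageusia|loss of (taste|smell))\b",
}
NEGATION = re.compile(r"\b(no|denies|without|negative for)\b[^.]{0,40}$")

def extract_symptoms(note: str):
    """Return (symptom, negated?) pairs found in a clinical note."""
    findings = []
    for symptom, pattern in SYMPTOM_LEXICON.items():
        for match in re.finditer(pattern, note, flags=re.IGNORECASE):
            preceding = note[max(0, match.start() - 40):match.start()]
            findings.append((symptom, bool(NEGATION.search(preceding))))
    return findings

print(extract_symptoms("Patient reports dry cough and fever, denies shortness of breath."))
```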
Although the fairness community has recognized the importance of data, researchers in the area primarily rely on UCI Adult when it comes to tabular data. Derived from a 1994 US Census survey, this dataset has appeared in hundreds of research papers, where it served as the basis for the development and comparison of many algorithmic fairness interventions. We reconstruct a superset of the UCI Adult data from available US Census sources and reveal idiosyncrasies of the UCI Adult dataset that limit its external validity. Our primary contribution is a suite of new datasets derived from US Census surveys that extend the existing data ecosystem for research on fair machine learning. We create prediction tasks relating to income, employment, health, transportation, and housing. The data span multiple years and all states of the United States, allowing researchers to study temporal shift and geographic variation. We highlight a broad initial sweep of new empirical observations relating to trade-offs between fairness criteria, the performance of algorithmic interventions, and the role of distribution shift based on our new datasets. Our findings inform ongoing debates, challenge some existing narratives, and point to future research directions. Our datasets are available at https://github.com/zykls/folktables.
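A brief usage sketch follows, based on the interface documented in the folktables repository; exact names and defaults may differ across versions, and the logistic-regression baseline and per-group breakdown are illustrative choices, not results from the paper.

```python
# Sketch of loading an ACS-derived prediction task with folktables and fitting
# a simple baseline. Follows the repository's documented usage; exact names and
# defaults may vary by version, and the model choice is illustrative.
from folktables import ACSDataSource, ACSIncome
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

data_source = ACSDataSource(survey_year="2018", horizon="1-Year", survey="person")
acs_data = data_source.get_data(states=["CA"], download=True)

# Features, binary income label, and a group attribute for disaggregated metrics.
features, label, group = ACSIncome.df_to_numpy(acs_data)

X_train, X_test, y_train, y_test, g_train, g_test = train_test_split(
    features, label, group, test_size=0.2, random_state=0
)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Per-group accuracy; combined with other states/years, this supports studying
# fairness trade-offs and distribution shift.
for g in set(g_test):
    mask = g_test == g
    print(g, clf.score(X_test[mask], y_test[mask]))
```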
We demonstrate a proof-of-concept of a large language model conducting corporate lobbying related activities. We use an autoregressive large language model (OpenAI's text-davinci-003) to determine if proposed U.S. Congressional bills are relevant to specific public companies and provide explanations and confidence levels. For the bills the model deems as relevant, the model drafts a letter to the sponsor of the bill in an attempt to persuade the congressperson to make changes to the proposed legislation. We use hundreds of ground-truth labels of the relevance of a bill to a company to benchmark the performance of the model, which outperforms the baseline of predicting the most common outcome of irrelevance. However, we test the ability to determine the relevance of a bill with the previous OpenAI GPT-3 model (text-davinci-002), which was state-of-the-art on many language tasks until text-davinci-003 was released on November 28, 2022. The performance of text-davinci-002 is worse than simply always predicting that a bill is irrelevant to a company. These results suggest that, as large language models continue to improve core natural language understanding capabilities, performance on corporate lobbying related tasks will continue to improve. We then discuss why this could be problematic for societal-AI alignment.
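A minimal sketch of the relevance-classification step follows, using the legacy OpenAI completions endpoint through which text-davinci-003 was served (openai library versions before 1.0). The prompt wording and output handling are assumptions for illustration, not the authors' actual prompts or pipeline.

```python
# Sketch of bill-relevance classification with the legacy OpenAI completions API
# (openai<1.0). The prompt wording and answer handling are illustrative
# assumptions, not the authors' actual prompt or pipeline.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

def bill_relevance(bill_summary: str, company_description: str) -> str:
    prompt = (
        "Decide whether the following U.S. Congressional bill is relevant to the company.\n"
        f"Bill summary: {bill_summary}\n"
        f"Company: {company_description}\n"
        "Answer YES or NO, then give a one-sentence explanation and a confidence from 0 to 100."
    )
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt,
        max_tokens=150,
        temperature=0,
    )
    return response["choices"][0]["text"].strip()

print(bill_relevance("Requires disclosure of ad targeting algorithms.", "A social media platform."))
```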
Variational autoencoders model high-dimensional data by positing low-dimensional latent variables that are mapped through a flexible distribution parametrized by a neural network. Unfortunately, variational autoencoders often suffer from posterior collapse: the posterior of the latent variables is equal to its prior, rendering the variational autoencoder useless as a means to produce meaningful representations. Existing approaches to posterior collapse often attribute it to the use of neural networks or optimization issues due to variational approximation. In this paper, we consider posterior collapse as a problem of latent variable non-identifiability. We prove that the posterior collapses if and only if the latent variables are non-identifiable in the generative model. This fact implies that posterior collapse is not a phenomenon specific to the use of flexible distributions or approximate inference. Rather, it can occur in classical probabilistic models even with exact inference, which we also demonstrate. Based on these results, we propose a class of latent-identifiable variational autoencoders, deep generative models which enforce identifiability without sacrificing flexibility. This model class resolves the problem of latent variable non-identifiability by leveraging bijective Brenier maps and parameterizing them with input convex neural networks, without special variational inference objectives or optimization tricks. Across synthetic and real datasets, latent-identifiable variational autoencoders outperform existing methods in mitigating posterior collapse and providing meaningful representations of the data.
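The abstract's central claim can be stated compactly. The following restates the two definitions and the claimed equivalence in our own notation (which need not match the paper's), for a generative model with prior p(z) and likelihood p_theta(x | z).

```latex
% Posterior collapse and latent non-identifiability for a model
% p_\theta(x, z) = p(z)\, p_\theta(x \mid z) with latent z and observation x.
\begin{align*}
\text{Posterior collapse:}\quad & p_\theta(z \mid x) = p(z) \quad \text{for (almost) all } x,\\
\text{Non-identifiability:}\quad & p_\theta(x \mid z) = p_\theta(x \mid z') \quad \text{for all } z, z',\\
\text{Claimed equivalence:}\quad & p_\theta(z \mid x) = p(z) \;\Longleftrightarrow\; p_\theta(x \mid z) \text{ does not depend on } z.
\end{align*}
```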
We introduce Argoverse 2 (AV2) - a collection of three datasets for perception and forecasting research in the self-driving domain. The annotated Sensor Dataset contains 1,000 sequences of multimodal data, encompassing high-resolution imagery from seven ring cameras, and two stereo cameras in addition to lidar point clouds, and 6-DOF map-aligned pose. Sequences contain 3D cuboid annotations for 26 object categories, all of which are sufficiently-sampled to support training and evaluation of 3D perception models. The Lidar Dataset contains 20,000 sequences of unlabeled lidar point clouds and map-aligned pose. This dataset is the largest ever collection of lidar sensor data and supports self-supervised learning and the emerging task of point cloud forecasting. Finally, the Motion Forecasting Dataset contains 250,000 scenarios mined for interesting and challenging interactions between the autonomous vehicle and other actors in each local scene. Models are tasked with the prediction of future motion for "scored actors" in each scenario and are provided with track histories that capture object location, heading, velocity, and category. In all three datasets, each scenario contains its own HD Map with 3D lane and crosswalk geometry - sourced from data captured in six distinct cities. We believe these datasets will support new and existing machine learning research problems in ways that existing datasets do not. All datasets are released under the CC BY-NC-SA 4.0 license.
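To illustrate the kind of inputs a motion-forecasting model receives, here is a small hypothetical data structure mirroring the fields named above (location, heading, velocity, category, and whether an actor is scored). It is illustrative only and is not the AV2 API or file schema.

```python
# Hypothetical container for a motion-forecasting scenario, mirroring the fields
# described above. Illustrative only; not the AV2 API or on-disk schema.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class TrackState:
    position_xy: Tuple[float, float]  # map-frame location (m)
    heading_rad: float                # yaw in the map frame
    velocity_xy: Tuple[float, float]  # m/s

@dataclass
class ActorTrack:
    actor_id: str
    category: str                     # e.g. "vehicle", "pedestrian"
    scored: bool                      # whether this actor's future motion is evaluated
    history: List[TrackState] = field(default_factory=list)

@dataclass
class Scenario:
    scenario_id: str
    city: str
    tracks: List[ActorTrack] = field(default_factory=list)

    def scored_actors(self) -> List[ActorTrack]:
        return [t for t in self.tracks if t.scored]
```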